
    A Constrained Transport Method for the Solution of the Resistive Relativistic MHD Equations

    We describe a novel Godunov-type numerical method for solving the equations of resistive relativistic magnetohydrodynamics. In the proposed approach, the spatial components of both magnetic and electric fields are located at zone interfaces and are evolved using the constrained transport formalism. Direct application of Stokes' theorem to Faraday's and Ampere's laws ensures that the resulting discretization is divergence-free for the magnetic field and charge-conserving for the electric field. Hydrodynamic variables retain, instead, the usual zone-centred representation commonly adopted in finite-volume schemes. Temporal discretization is based on Runge-Kutta implicit-explicit (IMEX) schemes in order to resolve the temporal scale disparity introduced by the stiff source term in Ampere's law. The implicit step is accomplished by means of an improved and more efficient Newton-Broyden multidimensional root-finding algorithm. The explicit step relies on a multidimensional Riemann solver to compute the line-averaged electric and magnetic fields at zone edges and employs a one-dimensional Riemann solver at zone interfaces to update zone-centred hydrodynamic quantities. For the latter, we introduce a five-wave solver based on the frozen limit of the relaxation system, whereby the solution to the Riemann problem can be decomposed into an outer Maxwell solver and an inner hydrodynamic solver. A number of numerical benchmarks demonstrate that our method is superior in stability and robustness to the more popular charge-conserving divergence-cleaning approach in which both primary electric and magnetic fields are zone-centred. In addition, the employment of a less diffusive Riemann solver noticeably improves the accuracy of the computations. Comment: 25 pages, 14 figures
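
    As a point of orientation only (this is not the paper's solver), the sketch below illustrates the kind of Newton-Broyden multidimensional root finding the abstract refers to: a finite-difference Jacobian is built once and then kept up to date with Broyden rank-1 corrections, which is what makes repeated implicit steps cheap. The function name, tolerances, and the toy test system are all illustrative.

```python
import numpy as np

def broyden_solve(f, x0, tol=1e-10, max_iter=50):
    """Find x with f(x) = 0 via Newton-type steps whose Jacobian is
    maintained by Broyden rank-1 updates instead of being rebuilt."""
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    if np.linalg.norm(fx) < tol:
        return x
    # Initial Jacobian by forward differences (a simple, arbitrary choice).
    n = x.size
    J = np.empty((n, n))
    h = 1e-7
    for j in range(n):
        xp = x.copy()
        xp[j] += h
        J[:, j] = (f(xp) - fx) / h
    for _ in range(max_iter):
        dx = np.linalg.solve(J, -fx)        # Newton direction
        x_new = x + dx
        fx_new = f(x_new)
        if np.linalg.norm(fx_new) < tol:
            return x_new
        # Broyden "good" update: J <- J + (df - J dx) dx^T / (dx^T dx),
        # so no additional derivative evaluations are needed per iteration.
        df = fx_new - fx
        J += np.outer(df - J @ dx, dx) / (dx @ dx)
        x, fx = x_new, fx_new
    return x

# Toy 2x2 nonlinear system with a root at (1, 1).
root = broyden_solve(lambda v: np.array([v[0]**2 + v[1] - 2.0,
                                         v[0] - v[1]**3]),
                     np.array([1.5, 0.8]))
print(root)
```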

    Beyond topological persistence: Starting from networks

    Persistent homology enables fast and computable comparison of topological objects. However, it is naturally limited to the analysis of topological spaces. We extend the theory of persistence by guaranteeing robustness and computability for significant data types such as simple graphs and quivers. We focus on categorical persistence functions that allow us to study, in full generality, strong kinds of connectedness such as clique communities, k-vertex and k-edge connectedness directly on simple graphs and monic coherent categories. Comment: arXiv admin note: text overlap with arXiv:1707.0967
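
    For readers unfamiliar with the connectedness notions named above, the following sketch (using networkx, and entirely separate from the paper's categorical persistence functions) simply tracks how k-clique communities and k-edge-connected components of a simple graph change as k grows; the example graph is an arbitrary stand-in.

```python
import networkx as nx
from networkx.algorithms.community import k_clique_communities

G = nx.karate_club_graph()  # arbitrary stand-in for a "simple graph" input

for k in range(2, 6):
    cliques = list(k_clique_communities(G, k))  # clique communities at level k
    edge_comps = [c for c in nx.k_edge_components(G, k=k) if len(c) > 1]
    print(f"k={k}: {len(cliques)} k-clique communities, "
          f"{len(edge_comps)} non-trivial k-edge-connected components")
```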

    A new approach to multiwavelength associations of astronomical sources

    One of the biggest problems faced by current and next-generation astronomical surveys is trying to produce large numbers of accurate cross-identifications across a range of wavelength regimes with varying data quality and positional uncertainty. Until recently, simple spatial 'nearest neighbour' associations have been sufficient for most applications. However, as advances in instrumentation allow more sensitive images to be made, the rapid increase in source density has meant that source confusion across multiple wavelengths is a serious problem. The field of far-IR and sub-mm astronomy has been particularly hampered by such problems. The poor angular resolution of current sub-mm and far-IR instruments is such that in many cases there are multiple plausible counterparts for each source at other wavelengths. Here we present a new automated method of producing associations between sources at different wavelengths using a combination of spatial and spectral energy distribution information set in a Bayesian framework. Testing of the technique is performed on both simulated catalogues of sources from GaLICS and real data from multiwavelength observations of the Subaru-XMM Deep Field. It is found that a single figure of merit, the Bayes factor, can be effectively used to describe the confidence in the match. Further applications of this technique to future Herschel data sets are discussed.
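
    For concreteness, the sketch below evaluates a purely positional Bayes factor under Gaussian positional errors, in a form commonly quoted for two-catalogue cross-matching; it is not necessarily the exact likelihood adopted in the paper, which also folds in spectral energy distribution information, and the function name and example numbers are invented for illustration.

```python
import numpy as np

ARCSEC_TO_RAD = np.pi / (180.0 * 3600.0)

def positional_bayes_factor(psi_arcsec, sigma1_arcsec, sigma2_arcsec):
    """Bayes factor for 'same source' vs 'chance alignment' given only an
    angular separation psi and per-catalogue Gaussian position errors:
    B = 2/(s1^2 + s2^2) * exp(-psi^2 / (2*(s1^2 + s2^2))), angles in radians."""
    psi = psi_arcsec * ARCSEC_TO_RAD
    s2 = (sigma1_arcsec**2 + sigma2_arcsec**2) * ARCSEC_TO_RAD**2
    return (2.0 / s2) * np.exp(-psi**2 / (2.0 * s2))

# Example: a sub-mm source with ~2" positional error against an optical
# counterpart with ~0.2" error, offset by 2"; B >> 1 favours association.
print(positional_bayes_factor(psi_arcsec=2.0, sigma1_arcsec=2.0,
                              sigma2_arcsec=0.2))
```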

    Terrain classification by cluster analysis

    Digital terrain modelling can be carried out with methods belonging to two principal categories: deterministic methods (e.g. polynomial and spline interpolation, Fourier spectra) and stochastic methods (e.g. least squares collocation and fractals, i.e. the concept of self-similarity in probability). To reach good results, both kinds of method need some suitable initial information, which can be gained by a preprocessing of the data named terrain classification. In fact, the deterministic methods require knowledge of the roughness of the terrain, related to the density of the data (elevations, deformations, etc.) used for the interpolation, while the stochastic methods require knowledge of the autocorrelation function of the data. Moreover, it may be useful or even necessary to split the area under consideration into subareas that are homogeneous with respect to some parameters, for several kinds of reasons (an initial data set too large to be processed together; important discontinuities or singularities; etc.). Last but not least, it can be worthwhile to test the type of distribution (normal or non-normal) of the subsets obtained by the preceding selection, because the statistical properties of the normal distribution are very important (e.g., least squares linear estimates coincide with maximum likelihood and minimum variance ones).
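
    A minimal sketch of the kind of preprocessing described above (toy data, arbitrary tile size and group count, not the authors' procedure): split a gridded terrain into subareas, describe each by a simple roughness measure, cluster the subareas into homogeneous groups, and test each group's detrended elevations for normality.

```python
import numpy as np
from scipy import stats
from scipy.cluster.vq import kmeans2

rng = np.random.default_rng(0)
dem = rng.normal(size=(128, 128)).cumsum(axis=0).cumsum(axis=1)  # toy terrain grid

# Split into 16x16 subareas and measure roughness as the spread of local
# elevation differences within each tile.
tiles = dem.reshape(8, 16, 8, 16).transpose(0, 2, 1, 3).reshape(64, 16, 16)
roughness = np.array([np.std(np.diff(t, axis=0)) for t in tiles])

# Cluster tiles into homogeneous groups by roughness (3 groups is arbitrary).
_, labels = kmeans2(roughness[:, None], 3, minit="++")

# Test each group's detrended elevations for normality (D'Agostino-Pearson).
for c in range(3):
    members = [t - t.mean() for t, lab in zip(tiles, labels) if lab == c]
    if not members:
        continue
    _, p = stats.normaltest(np.concatenate([m.ravel() for m in members]))
    print(f"group {c}: {len(members)} tiles, normality p-value = {p:.3g}")
```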